In this paper, we study one-shot skeleton-based action recognition, which poses unique challenges in learning transferable representations from base classes to novel classes, particularly for fine-grained actions. Existing meta-learning frameworks typically rely on body-level representations in the spatial dimension, which limits the generalization needed to capture subtle visual differences in a fine-grained label space. To overcome this limitation, we propose a part-aware prototype representation for one-shot skeleton-based action recognition. Our method captures skeleton motion patterns at two distinct spatial levels: a global context over all body joints, referred to as the body level, and local spatial regions attending to individual body parts, referred to as the part level. We also design a class-agnostic attention mechanism to highlight the important parts for each action class. Specifically, we develop a part-aware prototype graph network consisting of three modules: a cascaded embedding module for our dual-level modeling, an attention-based part fusion module that fuses parts and generates part-aware prototypes, and a matching module that performs classification with the part-aware representations. We demonstrate the effectiveness of our method on two public skeleton-based action recognition datasets: NTU RGB+D 120 and NW-UCLA.
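To make the attention-based part fusion concrete, here is a minimal NumPy sketch of how per-part features could be fused into a part-aware prototype via class-agnostic attention and then matched by cosine similarity. The function names and the shared scoring vector `w_attn` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def part_aware_prototype(part_feats, w_attn):
    """Fuse per-part features (P, D) into one prototype via
    class-agnostic attention scored by a shared vector w_attn (D,)."""
    scores = part_feats @ w_attn           # (P,) one score per body part
    alpha = softmax(scores)                # attention weights over parts
    return alpha @ part_feats              # (D,) attention-weighted fusion

def classify(query_parts, prototypes, w_attn):
    """Match the query's fused representation to class prototypes
    by cosine similarity and return the best class index."""
    q = part_aware_prototype(query_parts, w_attn)
    sims = [q @ p / (np.linalg.norm(q) * np.linalg.norm(p))
            for p in prototypes]
    return int(np.argmax(sims))
```

In a one-shot setting, each class prototype would be built from the single support sample's part features with the same fusion function.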
Action quality assessment (AQA) is important for action understanding and poses a unique challenge due to subtle visual differences. Existing state-of-the-art methods typically rely on holistic video representations for score regression or ranking, which limits their generalization in capturing fine-grained intra-class variation. To overcome this limitation, we propose a temporal parsing transformer that decomposes the holistic feature into temporal part-level representations. Specifically, we utilize a set of learnable queries to represent the atomic temporal patterns of a specific action. Our decoding process converts the frame representations into a fixed number of temporally ordered part representations. To obtain the quality score, we adopt state-of-the-art contrastive regression based on the part representations. Since existing AQA datasets do not provide temporal part-level labels or partitions, we propose two novel loss functions on the cross-attention responses of the decoder: a ranking loss to ensure that the learnable queries satisfy the temporal order in cross attention, and a sparsity loss to encourage the part representations to be more discriminative. Extensive experiments show that our proposed method outperforms prior work on three public AQA benchmarks by a considerable margin.
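The two supervision signals on the decoder's cross-attention responses can be sketched in NumPy. This is a toy single-head cross-attention step plus plausible stand-ins for the two losses (a hinge on the attention centres for temporal ordering, and row entropy as a sparsity surrogate); the exact formulations in the paper may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parse_parts(frames, queries):
    """One cross-attention step: K learnable queries (K, D) attend over
    frame features (T, D); returns K part features and the (K, T) map."""
    attn = softmax(queries @ frames.T / np.sqrt(frames.shape[1]), axis=-1)
    return attn @ frames, attn

def ranking_loss(attn):
    """Hinge penalty whenever the attention centre of query k does not
    precede that of query k+1 (enforces temporal order of parts)."""
    t = np.arange(attn.shape[1])
    centers = attn @ t                     # expected frame index per query
    return float(np.maximum(0.0, centers[:-1] - centers[1:]).sum())

def sparsity_loss(attn):
    """Mean entropy of the attention rows; lower values mean more
    peaked, hence more discriminative, part assignments."""
    return float(-(attn * np.log(attn + 1e-9)).sum(axis=1).mean())
```

A perfectly ordered, one-hot attention map incurs zero ranking loss and near-zero entropy, while a reversed or uniform map is penalized.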
Modeling various spatio-temporal dependencies is the key to recognizing human actions in skeleton sequences. Most existing methods over-rely on the design of traversal rules or graph topologies to exploit dynamic joint dependencies, which is insufficient to reflect the relationships between distant yet important joints. Moreover, important long-range temporal information is under-explored in existing works owing to their locally applied operations. To address these issues, we propose LSTA-Net, a novel Long/Short-term Spatio-Temporal Aggregation Network that can effectively capture long- and short-range dependencies in a spatio-temporal manner. We design our model as a purely factorized architecture that alternately performs spatial feature aggregation and temporal feature aggregation. To strengthen the feature aggregation, a channel-wise attention mechanism is also designed and employed. Extensive experiments on three public benchmark datasets show that our method can capture both long- and short-range dependencies in the spatial and temporal domains, yielding better results than other state-of-the-art methods. Code is available at https://github.com/tailin1009/lsta-net.
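The factorized alternation of spatial and temporal aggregation, finished by channel-wise gating, can be illustrated with a minimal NumPy block. The specific operators (row-normalized graph averaging, a moving average over time, sigmoid squeeze-style gating) are simplified stand-ins for the learned layers.

```python
import numpy as np

def spatial_aggregate(x, adj):
    """Graph aggregation over joints: x is (T, V, C), adj is (V, V).
    Rows of adj are normalized so each joint averages its neighbours."""
    a = adj / adj.sum(axis=1, keepdims=True)
    return np.einsum('vw,twc->tvc', a, x)

def temporal_aggregate(x, k=3):
    """Moving average over the time axis with window k (edge-padded)."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)), mode='edge')
    return np.stack([xp[t:t + k].mean(axis=0) for t in range(x.shape[0])])

def channel_attention(x):
    """Channel-wise gating: global average pool, sigmoid, rescale."""
    s = 1.0 / (1.0 + np.exp(-x.mean(axis=(0, 1))))   # (C,)
    return x * s

def lsta_block(x, adj):
    """One factorized block: spatial, then temporal, then gating."""
    return channel_attention(temporal_aggregate(spatial_aggregate(x, adj)))
```

Stacking such blocks alternates spatial and temporal aggregation, which is the factorization the abstract describes; the real network learns the aggregation weights rather than averaging.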
Dynamic Graph Neural Networks (DGNNs) have been broadly applied in various real-life applications, such as link prediction and pandemic forecast, to capture both static structural information and temporal characteristics from dynamic graphs. Combining both time-dependent and -independent components, DGNNs manifest substantial parallel computation and data reuse potentials, but suffer from severe memory access inefficiency and data transfer overhead under the canonical one-graph-at-a-time training pattern. To tackle the challenges, we propose PiPAD, a $\underline{\textbf{Pi}}pelined$ and $\underline{\textbf{PA}}rallel$ $\underline{\textbf{D}}GNN$ training framework for the end-to-end performance optimization on GPUs. From both the algorithm and runtime level, PiPAD holistically reconstructs the overall training paradigm from the data organization to computation manner. Capable of processing multiple graph snapshots in parallel, PiPAD eliminates the unnecessary data transmission and alleviates memory access inefficiency to improve the overall performance. Our evaluation across various datasets shows PiPAD achieves $1.22\times$-$9.57\times$ speedup over the state-of-the-art DGNN frameworks on three representative models.
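The core performance idea of processing several snapshots at once, rather than one graph at a time, can be shown in miniature with NumPy. This sketch only illustrates the data-reuse principle (the shared time-independent weight is applied once to a stacked batch); PiPAD's actual GPU pipelining and memory-layout optimizations are far more involved.

```python
import numpy as np

def per_snapshot(features, weight):
    """Canonical pattern: transform one graph snapshot at a time,
    reloading the shared weight for every snapshot."""
    return [f @ weight for f in features]

def batched_snapshots(features, weight):
    """PiPAD-style idea in miniature: stack S snapshots (each (N, C))
    and run the shared, time-independent transform as one batched
    operation, amortizing weight loads and transfer/launch overhead."""
    stacked = np.stack(features)           # (S, N, C)
    return stacked @ weight                # one batched matmul, (S, N, D)
```

Both variants compute identical results; the batched form is the one that exposes parallelism and data reuse to the hardware.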
Recently, change detection methods for synthetic aperture radar (SAR) images based on convolutional neural networks (CNNs) have attracted increasing research attention. However, existing CNN-based methods neglect the interactions among multi-layer convolutions, and the pre-classification involved restricts network optimization. To this end, we propose a layer attention-based noise-tolerant network, termed LANTNet. In particular, we design a layer attention module that adaptively weights the features of different convolutional layers. In addition, we design a noise-tolerant loss function that effectively suppresses the impact of noisy labels, so the model is insensitive to noisy labels in the pre-classification results. Experimental results on three SAR datasets show that the proposed LANTNet performs better than several state-of-the-art methods. The source code is available at https://github.com/summitgao/lantnet
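The two ingredients can be sketched in NumPy: an adaptive weighting across layer features, and a noise-tolerant classification loss. The layer scoring rule here is a simplification, and generalized cross-entropy is used as a well-known stand-in for a noise-tolerant loss; neither is claimed to be LANTNet's exact formulation.

```python
import numpy as np

def layer_attention(layer_feats):
    """Adaptively weight features from L conv layers (each (H, W, C)):
    score each layer by its global average activation, softmax the
    scores, and fuse the layers with the resulting weights."""
    stack = np.stack(layer_feats)                  # (L, H, W, C)
    scores = stack.mean(axis=(1, 2, 3))            # (L,)
    e = np.exp(scores - scores.max())
    w = e / e.sum()
    return np.einsum('l,lhwc->hwc', w, stack)

def gce_loss(probs, labels, q=0.7):
    """Generalized cross-entropy, a standard noise-tolerant loss:
    (1 - p_y^q) / q grows much more slowly than -log(p_y) when the
    labelled class gets low probability, down-weighting noisy labels."""
    p_y = probs[np.arange(len(labels)), labels]
    return float(((1.0 - p_y ** q) / q).mean())
```

Under such a loss, confidently contradicting a (possibly noisy) label is penalized less harshly than under plain cross-entropy, which is what makes training on pre-classification pseudo-labels workable.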
Visual Question Answering (VQA) is a challenging task that has attracted increasing attention in the fields of computer vision and natural language processing. However, current VQA models suffer from a language prior (bias) problem, which reduces their robustness and adversely affects practical applications of VQA. In this paper, we conduct, for the first time, a comprehensive review and analysis of this field, classifying existing methods into three categories: enhancing visual information, weakening language priors, and data augmentation and training strategies. Representative methods in each category are introduced, summarized, and analyzed in turn, and the causes of language bias are revealed and classified. This paper then introduces the datasets mainly used for evaluation and reports the experimental results of various existing methods. Finally, we discuss possible future research directions in this field.
Benefiting from its intrinsic ability to exploit supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms. 1) The quality of positive samples heavily depends on carefully designed data augmentations, while inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls samples from the same cluster close together while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
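The cluster-guided objective can be sketched in NumPy: cross-view counterparts of high-confidence nodes serve as positives, and the centres of the *other* high-confidence clusters serve as negatives, combined in an InfoNCE-style loss over cosine similarities. This is an illustrative simplification with assumed names (`conf`, `tau`, `thresh`), not the paper's implementation.

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def cluster_contrastive_loss(z1, z2, labels, conf, tau=0.5, thresh=0.9):
    """Cross-view loss in the spirit of CCGC: for each high-confidence
    node i, the positive is its counterpart z2[i] in the other view,
    and the negatives are the other high-confidence cluster centres."""
    keep = conf >= thresh
    centers = {c: z2[keep & (labels == c)].mean(axis=0)
               for c in np.unique(labels[keep])}
    losses = []
    for i in np.where(keep)[0]:
        pos = np.exp(cos(z1[i], z2[i]) / tau)
        neg = sum(np.exp(cos(z1[i], m) / tau)
                  for c, m in centers.items() if c != labels[i])
        losses.append(-np.log(pos / (pos + neg + 1e-9)))
    return float(np.mean(losses))
```

When the two views agree on cluster structure the loss is small; misaligned views (a node paired with the wrong cluster across views) are heavily penalized, which is what drives the views toward a shared clustering.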
As one of the prevalent methods for building automation systems, Imitation Learning (IL) delivers promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models remains limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses environment returns, the conventional evaluation outcome, as coefficients to build an importance map. We also conducted experiments to investigate three major questions concerning the equality of frames' importance, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
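The RISE-style accumulation over random frame masks is simple enough to sketch directly. Here `train_and_eval(mask)` is a hypothetical callback that abstracts away "retrain the black-box IL model on the kept frames and return the environment return"; the masking and weighting loop is the part the sketch illustrates.

```python
import numpy as np

def r2rise_importance(n_frames, train_and_eval, n_iters=500,
                      p_keep=0.5, seed=0):
    """RISE-style importance map over demonstration frames: sample
    random binary masks, retrain/evaluate the policy on the kept
    frames, and accumulate each mask weighted by its return."""
    rng = np.random.default_rng(seed)
    importance = np.zeros(n_frames)
    total = 0.0
    for _ in range(n_iters):
        mask = (rng.random(n_frames) < p_keep).astype(float)
        score = train_and_eval(mask)   # environment return as coefficient
        importance += score * mask     # frames kept in good runs gain weight
        total += score
    return importance / max(total, 1e-9)
```

Frames whose presence correlates with higher returns accumulate more weight, so the normalized map ranks frames by their contribution to policy performance.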
Text clustering and topic extraction are two important tasks in text mining. Usually, these two tasks are performed separately. For topic extraction to facilitate clustering, we can first project texts into a topic space and then run a clustering algorithm to obtain clusters. For clustering to promote topic extraction, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship; performing them separately fails to let them mutually benefit each other and achieve the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) which integrates the two tasks into a unified framework, achieving high-quality clustering results while extracting topics from each cluster simultaneously. Our framework comprises four components: enhanced language model training, dimensionality reduction, clustering, and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, thanks to its self-attention architecture, it pays close attention to topic-related words for topic extraction. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations within this framework.
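The cluster-then-extract half of the pipeline can be sketched with NumPy: cluster document vectors, then surface each cluster's topic as the words whose in-cluster frequency most exceeds their corpus-wide frequency. This stands in for ClusTop's clustering and topic-extraction components only; the enhanced language model and dimensionality reduction are omitted, and raw term counts play the role of embeddings.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Minimal k-means on an embedding matrix x (N, D)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

def cluster_topics(doc_term, labels, vocab, top_n=2):
    """Cluster-specific topics: rank words by how much their mean
    in-cluster frequency exceeds the corpus-wide mean frequency."""
    global_freq = doc_term.mean(0)
    topics = {}
    for c in np.unique(labels):
        score = doc_term[labels == c].mean(0) - global_freq
        topics[int(c)] = [vocab[i] for i in np.argsort(score)[::-1][:top_n]]
    return topics
```

Subtracting the global frequency keeps ubiquitous words (stopwords, domain boilerplate) out of every cluster's topic, which is the point of making topics cluster-specific.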
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
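The extensibility claim rests on conditioning the segmentation head on frozen text embeddings rather than on fixed per-class weights. A minimal NumPy sketch of that conditioning, assuming one CLIP text embedding per class and a simple dot-product head (the real model's class-conditioned decoder is more elaborate):

```python
import numpy as np

def clip_driven_logits(pixel_feats, text_embeds):
    """Per-class segmentation logits as the dot product between pixel
    features (H, W, D) and one frozen text embedding per class (K, D).
    Adding a novel class only requires appending a new text embedding,
    not learning new decoder weights."""
    return np.einsum('hwd,kd->khw', pixel_feats, text_embeds)

def predict_masks(pixel_feats, text_embeds, thresh=0.0):
    """One-vs-all binary mask per class by thresholding the logits
    (organs and tumours may overlap, so classes are not exclusive)."""
    return clip_driven_logits(pixel_feats, text_embeds) > thresh
```

Because each class is an independent one-vs-all head keyed by its text embedding, previously learned classes are untouched when a new embedding is appended, which is the mechanism behind extending without catastrophic forgetting.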